The Computational Structure of Spike Trains
Neurons perform computations, and convey the results of those computations
through the statistical structure of their output spike trains. Here we present
a practical method, grounded in the information-theoretic analysis of
prediction, for inferring a minimal representation of that structure and for
characterizing its complexity. Starting from spike trains, our approach finds
their causal state models (CSMs), the minimal hidden Markov models or
stochastic automata capable of generating statistically identical time series.
We then use these CSMs to objectively quantify both the generalizable structure
and the idiosyncratic randomness of the spike train. Specifically, we show that
the expected algorithmic information content (the information needed to
describe the spike train exactly) can be split into three parts describing (1)
the time-invariant structure (complexity) of the minimal spike-generating
process, which describes the spike train statistically; (2) the randomness
(internal entropy rate) of the minimal spike-generating process; and (3) a
residual pure noise term not described by the minimal spike-generating process.
We use CSMs to approximate each of these quantities. The CSMs are inferred
nonparametrically from the data, making only mild regularity assumptions, via
the causal state splitting reconstruction algorithm. The methods presented here
complement more traditional spike train analyses by describing not only spiking
probability and spike train entropy, but also the complexity of a spike train's
structure. We demonstrate our approach using both simulated spike trains and
experimental data recorded in rat barrel cortex during vibrissa stimulation.
Comment: Somewhat different format from journal version but same content.
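The state-splitting idea behind causal state reconstruction can be caricatured in a few lines: group history suffixes whose empirical next-symbol distributions agree, so that each group acts as one state of the minimal generating process. The sketch below is only illustrative; the function names, the `tol` threshold, and the naive distribution comparison are stand-ins for the proper significance test and iterative refinement used by the actual causal state splitting reconstruction algorithm.

```python
from collections import Counter

def next_symbol_dist(series, history, alphabet):
    """Empirical distribution of the next symbol, conditioned on a history suffix."""
    counts = Counter()
    h = len(history)
    for i in range(h, len(series)):
        if tuple(series[i - h:i]) == history:
            counts[series[i]] += 1
    total = sum(counts.values())
    return {a: counts[a] / total for a in alphabet} if total else None

def cssr_sketch(series, max_len=2, alphabet=(0, 1), tol=0.1):
    """Toy causal-state splitting: put two history suffixes in the same state
    when their next-symbol distributions agree within `tol`."""
    suffixes = set()
    for L in range(1, max_len + 1):
        for i in range(len(series) - L):
            suffixes.add(tuple(series[i:i + L]))
    states = []  # each state: list of (history, distribution) pairs
    for h in sorted(suffixes):
        d = next_symbol_dist(series, h, alphabet)
        if d is None:
            continue
        for state in states:
            ref = state[0][1]  # compare against the state's first member
            if all(abs(d[a] - ref[a]) < tol for a in alphabet):
                state.append((h, d))
                break
        else:
            states.append([(h, d)])
    return states

# A strictly alternating "spike train" has exactly two causal states:
# "just spiked" (next symbol is 0) and "just silent" (next symbol is 1).
train = [0, 1] * 50
states = cssr_sketch(train)
```

The number of recovered states is the kind of quantity the statistical complexity measures: here two states suffice to describe the process, however long the recording.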
Discovering Functional Communities in Dynamical Networks
Many networks are important because they are substrates for dynamical
systems, and their pattern of functional connectivity can itself be dynamic --
they can functionally reorganize, even if their underlying anatomical structure
remains fixed. However, the recent rapid progress in discovering the community
structure of networks has overwhelmingly focused on that constant anatomical
connectivity. In this paper, we lay out the problem of discovering _functional
communities_, and describe an approach to doing so. This method combines recent
work on measuring information sharing across stochastic networks with an
existing and successful community-discovery algorithm for weighted networks. We
illustrate it with an application to a large biophysical model of the
transition from beta to gamma rhythms in the hippocampus.
Comment: 18 pages, 4 figures, Springer "Lecture Notes in Computer Science"
style. Forthcoming in the proceedings of the workshop "Statistical Network
Analysis: Models, Issues and New Directions", at ICML 2006. Version 2: small
clarifications, typo corrections, added references.
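The pipeline this abstract describes can be caricatured as: build a weighted graph whose edge weights measure information sharing between nodes' dynamics, then run community discovery on it. The sketch below is a deliberately crude stand-in, assuming binary node time series: it uses plug-in pairwise mutual information for the weights and thresholded connected components in place of the weighted community-discovery algorithm the paper actually employs; `functional_communities` and the `threshold` value are hypothetical.

```python
import numpy as np
from math import log

def mutual_information(x, y):
    """Plug-in mutual information (nats) between two binary time series."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * log(pxy / (px * py))
    return mi

def functional_communities(series, threshold=0.1):
    """Threshold the pairwise MI matrix into an unweighted graph and return
    its connected components -- a crude proxy for weighted community discovery."""
    n = len(series)
    adj = [[mutual_information(series[i], series[j]) > threshold
            for j in range(n)] for i in range(n)]
    seen, comms = set(), []
    for i in range(n):
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(v for v in range(n) if adj[u][v] and v not in comp)
        seen |= comp
        comms.append(sorted(comp))
    return comms

# Two pairs of nodes with shared dynamics form two functional communities.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2000)
b = rng.integers(0, 2, 2000)
comms = functional_communities([a, a, b, b])
```

The point of the thresholded-components shortcut is only to show the data flow; a modularity-based weighted algorithm, as in the paper, would use the MI values directly rather than binarizing them.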
Measuring Shared Information and Coordinated Activity in Neuronal Networks
Most nervous systems encode information about stimuli in the responding
activity of large neuronal networks. This activity often manifests itself as
dynamically coordinated sequences of action potentials. Since multiple
electrode recordings are now a standard tool in neuroscience research, it is
important to have a measure of such network-wide behavioral coordination and
information sharing, applicable to multiple neural spike train data. We propose
a new statistic, informational coherence, which measures how much better one
unit can be predicted by knowing the dynamical state of another. We argue
informational coherence is a measure of association and shared information
which is superior to traditional pairwise measures of synchronization and
correlation. To find the dynamical states, we use a recently introduced
algorithm which reconstructs effective state spaces from stochastic time
series. We then extend the pairwise measure to a multivariate analysis of the
network by estimating the network multi-information. We illustrate our method
by testing it on a detailed model of the transition from gamma to beta rhythms.
Comment: 8 pages, 6 figures
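A coherence-style statistic of this flavor can be sketched as the mutual information between two units' state sequences, normalized by the smaller marginal entropy so that it lies in [0, 1]. The sketch below assumes the dynamical states have already been reconstructed by some upstream step (the paper uses a state-space reconstruction algorithm); any discrete state labelling can be plugged in, and the normalization choice here is one plausible convention, not necessarily the paper's.

```python
import random
from collections import Counter
from math import log

def entropy(seq):
    """Plug-in Shannon entropy (nats) of a sequence of discrete symbols."""
    n = len(seq)
    return -sum(c / n * log(c / n) for c in Counter(seq).values())

def informational_coherence(states_x, states_y):
    """Mutual information between two units' state sequences, normalized
    to [0, 1] by the smaller marginal entropy."""
    hx, hy = entropy(states_x), entropy(states_y)
    hxy = entropy(list(zip(states_x, states_y)))  # joint entropy of state pairs
    mi = hx + hy - hxy
    denom = min(hx, hy)
    return mi / denom if denom > 0 else 0.0

# Identical state sequences are maximally coherent; independent ones are not.
random.seed(1)
x = [random.randrange(4) for _ in range(5000)]
y = [random.randrange(4) for _ in range(5000)]
```

Because it operates on reconstructed states rather than raw spike counts, a statistic like this can register coordination between units whose spikes are related nonlinearly or with variable lags, which simple cross-correlation misses.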
Automatic Filters for the Detection of Coherent Structure in Spatiotemporal Systems
Most current methods for identifying coherent structures in
spatially-extended systems rely on prior information about the form which those
structures take. Here we present two new approaches to automatically filter the
changing configurations of spatial dynamical systems and extract coherent
structures. One, local sensitivity filtering, is a modification of the local
Lyapunov exponent approach suitable to cellular automata and other discrete
spatial systems. The other, local statistical complexity filtering, calculates
the amount of information needed for optimal prediction of the system's
behavior in the vicinity of a given point. By examining the changing
spatiotemporal distributions of these quantities, we can find the coherent
structures in a variety of pattern-forming cellular automata, without needing
to guess or postulate the form of that structure. We apply both filters to
elementary and cyclical cellular automata (ECA and CCA) and find that they
readily identify particles, domains and other more complicated structures. We
compare the results from ECA with earlier ones based upon the theory of formal
languages, and the results from CCA with a more traditional approach based on
an order parameter and free energy. While sensitivity and statistical
complexity are equally adept at uncovering structure, they are based on
different system properties (dynamical and probabilistic, respectively), and
provide complementary information.
Comment: 16 pages, 21 figures. Figures considerably compressed to fit arXiv
requirements; write first author for higher-resolution version.
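The local sensitivity filter is straightforward to sketch for elementary cellular automata: flip one cell, evolve the perturbed and unperturbed configurations for a few steps, and record how far the difference spreads. The code below is a minimal illustration (function names and the default rule/step counts are choices of this sketch, not the paper's exact settings); cells whose perturbations spread widely are candidates for sitting on particles or other coherent structures.

```python
import numpy as np

def eca_step(state, rule=110):
    """One synchronous update of an elementary CA with periodic boundaries.
    Neighborhood (left, center, right) indexes the rule's 8-entry lookup table."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    table = np.array([(rule >> k) & 1 for k in range(8)], dtype=state.dtype)
    return table[idx]

def local_sensitivity(state, steps=5, rule=110):
    """Local sensitivity filter (sketch): flip each cell in turn, evolve both
    copies `steps` steps, and count how many cells end up differing."""
    n = len(state)
    base = state.copy()
    for _ in range(steps):
        base = eca_step(base, rule)
    sens = np.zeros(n, dtype=int)
    for i in range(n):
        pert = state.copy()
        pert[i] ^= 1  # single-site perturbation
        for _ in range(steps):
            pert = eca_step(pert, rule)
        sens[i] = int(np.sum(pert != base))
    return sens

config = np.array([0, 1, 1, 0, 1, 0, 0, 1])
```

Two sanity checks make the filter's meaning concrete: under rule 0 every configuration collapses to all zeros, so no perturbation survives, while under the identity rule 204 a flipped cell persists but never spreads, giving sensitivity exactly 1 everywhere.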